An Adaptive Directional Metropolis-within-Gibbs algorithm

Author

  • Yan Bai
Abstract

In this paper we propose a simple adaptive Metropolis-within-Gibbs algorithm that studies directions along which the Metropolis algorithm can be run flexibly. The algorithm avoids wasted moves in wrong directions by building its proposals on the full-dimensional adaptive Metropolis algorithm. We also prove its ergodicity, and test it on a Gaussian Needle example and a real-life Case-Cohort study with competing risks. For the Cohort study, we describe an extended version of the Competing Risks Regression model, define censoring variables for the competing risks, and then apply the algorithm to estimate the coefficients from the posterior distribution.

Introduction

Classical Metropolis-within-Gibbs algorithms only propose values along the coordinate directions and then accept or reject those values. When the target distribution has strong correlations in some directions, the MCMC algorithm may not work very well, especially on a high-dimensional space, because many wasted jumps are proposed. In this paper we propose a simple adaptive Metropolis-within-Gibbs algorithm (ADMwG) that attempts to learn directions from the chain's history and to jump along these directions. The effective directions are extracted from the empirical covariance matrix through singular value decomposition. Some sufficient conditions for ergodicity are given. We also apply the adaptive algorithm to a Gaussian Needle example and a real-life Case-Cohort study example with competing risks. For the Cohort study, an extended version of the Competing Risks Regression model is proposed, and the algorithm is then used to estimate the coefficients from the posterior distribution. See recent results about adaptive MCMC in [3, 5, 2, 31, 36, 33, 7, 4, 15, 6].

In Section 1 we review the Metropolis-Hastings algorithm, the Metropolis-within-Gibbs sampler, and a certain adaptive Metropolis algorithm. Their common ground is that each step constructs a reversible Markov chain. Although these algorithms are very successful for many target distributions, they cannot work efficiently in cases where either many wasted jumps are generated or many small moves are generated because of the limited jumping directions. This phenomenon is very pronounced in high-dimensional cases and in some extreme low-dimensional cases; a toy example is presented in Section 2 as an illustration. In Section 3 we propose ADMwG. The idea is similar to that of the Hit-and-Run algorithm, whose framework is to draw a random direction uniformly on the unit hypersphere and then sample a scalar from some proposal distribution along the chosen direction; see the literature [10, 11, 17, 29, 12, 21, 23, 24, 25, 9].

∗ Department of Statistics, University of Toronto, Toronto, ON M5S 3G3, CA. [email protected]
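To make the directional mechanism concrete, the following is a minimal Python/NumPy sketch of one adaptive directional Metropolis-within-Gibbs sweep in the spirit described above: the directions are taken as the singular vectors of the empirical covariance of the past samples, and a one-dimensional Metropolis move is made along each direction in turn. The function name directional_mwg_step, the 2.4/sqrt(d) scaling, the 1e-6 regularization, and the burn-in threshold are illustrative assumptions, not the paper's exact choices; a Hit-and-Run style sampler would instead draw each direction uniformly on the unit hypersphere.

import numpy as np

def directional_mwg_step(x, log_target, history, rng, scale=2.4):
    # One sweep of a directional Metropolis-within-Gibbs update (illustrative
    # sketch, not the paper's exact algorithm). Directions are the singular
    # vectors of the empirical covariance of past samples, so moves follow the
    # target's correlation structure rather than the coordinate axes.
    d = x.size
    if len(history) > 10 * d:
        cov = np.cov(np.asarray(history).T) + 1e-6 * np.eye(d)  # regularized
        U, s, _ = np.linalg.svd(cov)        # columns of U are the directions
        sds = np.sqrt(s)                    # chain's spread along each direction
    else:
        U, sds = np.eye(d), np.ones(d)      # early on: plain coordinate moves

    for j in range(d):
        direction = U[:, j]
        # 1-D symmetric Metropolis proposal along the chosen direction; a
        # Hit-and-Run move would instead use a uniformly random unit direction.
        step = rng.normal(0.0, scale * sds[j] / np.sqrt(d))
        y = x + step * direction
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
    return x

# Usage sketch on a strongly correlated 2-D Gaussian ("needle"-like) target.
if __name__ == "__main__":
    target_cov = np.array([[1.0, 0.99], [0.99, 1.0]])
    prec = np.linalg.inv(target_cov)
    log_target = lambda z: -0.5 * z @ prec @ z

    rng = np.random.default_rng(0)
    x, history = np.zeros(2), []
    for _ in range(5000):
        x = directional_mwg_step(x, log_target, history, rng)
        history.append(x.copy())
    print(np.cov(np.asarray(history[1000:]).T))  # should roughly recover target_cov

The 2.4/sqrt(d) factor follows the common adaptive Metropolis scaling of Haario et al. (2001), cited in the related work below; any fixed positive scale would still give valid one-dimensional Metropolis updates, while the adaptation of the directions is what the ergodicity conditions in the paper are meant to justify.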

Similar articles

Approximating Bayes Estimates by Means of the Tierney Kadane, Importance Sampling and Metropolis-Hastings within Gibbs Methods in the Poisson-Exponential Distribution: A Comparative Study

Here, we work on the problem of point estimation of the parameters of the Poisson-exponential distribution through the Bayesian and maximum likelihood methods based on complete samples. The point Bayes estimates under the symmetric squared error loss (SEL) function are approximated using three methods, namely the Tierney Kadane approximation method, the importance sampling method and the Metrop...

Examples of Adaptive MCMC

We investigate the use of adaptive MCMC algorithms to automatically tune the Markov chain parameters during a run. Examples include the Adaptive Metropolis (AM) multivariate algorithm of Haario et al. (2001), Metropolis-within-Gibbs algorithms for non-conjugate hierarchical models, regionally adjusted Metropolis algorithms, and logarithmic scalings. Computer simulations indicate that the algori...

On adaptive Metropolis-Hastings methods

This paper presents a method for adaptation in Metropolis-Hastings algorithms. A product of a proposal density and K copies of the target density is used to define a joint density which is sampled by a Gibbs sampler including a Metropolis step. This provides a framework for adaptation since the current value of all K copies of the target distribution can be used in the proposal distribution. Th...

Publication date: 2009